    SOCR: Statistics Online Computational Resource

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches to problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages such as STATA, S-PLUS, R, SPSS, SAS and Systat, we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform-independent, web-based, interactive, extensible and secure. Over the past four years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

    Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be ‘team science’.
    http://deepblue.lib.umich.edu/bitstream/2027.42/134522/1/13742_2016_Article_117.pd

    SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit

    The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). These resources have been shown to successfully improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a relatively new component that concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, and one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category. Examples in the non-parametric category are the Wilcoxon rank-sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnov test and Fligner-Killeen test. Hypothesis testing models include the contingency table, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and utilization of SOCR Analyses.
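    SOCR Analyses itself is a Java toolkit; purely as a hypothetical illustration (not SOCR code), two of the non-parametric sample comparisons listed above can be reproduced in Python with scipy.stats. The sample data below are invented:

```python
# Illustrative sketch (not SOCR code): two of the non-parametric
# comparisons named above, applied to two small invented samples.
from scipy import stats

group_a = [12.1, 14.3, 13.8, 11.9, 15.0, 13.2]
group_b = [10.4, 11.7, 12.0, 10.9, 11.3, 12.5]

# Wilcoxon rank-sum test (Mann-Whitney U for independent samples).
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Kruskal-Wallis test generalizes the rank-sum comparison to 3+ groups;
# here it is applied to the same two groups for comparison.
h_stat, h_p = stats.kruskal(group_a, group_b)

print(f"Mann-Whitney U p-value: {u_p:.4f}")
print(f"Kruskal-Wallis p-value: {h_p:.4f}")
```

Both tests operate on ranks rather than raw values, which is what makes them applicable without the normality assumption behind the t-test.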

    P3‐104: Gene‐Brain Structure Networking Analysis In Alzheimer’S Disease Using The Pipeline Environment

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/152823/1/alzjjalz2019063132.pd

    Hypothesis: Caco‐2 cell rotational 3D mechanogenomic turing patterns have clinical implications to colon crypts

    Colon crypts are recognized as a mechanical and biochemical Turing patterning model. A colon epithelial Caco‐2 cell monolayer demonstrated 2D Turing patterns via force analysis of apical tight junction live cell imaging, which illuminated the actomyosin meshwork linking the actomyosin networks of individual cells. Actomyosin forces act in a mechanobiological manner that alters cell/nucleus/tissue morphology. We observed rotational motion of the nucleus in Caco‐2 cells that appears to be driven by actomyosin during the formation of a differentiated confluent epithelium. Single‐ to multi‐cell ring/torus‐shaped genomes were observed prior to complex fractal Turing patterns extending from a rotating torus centre in a spiral pattern consistent with a gene morphogen motif. These features may contribute to the well‐described differentiation from stem cells at the crypt base to the luminal colon epithelium along the crypt axis. This observation may be useful for studying the role of mechanogenomic processes and the underlying molecular mechanisms as determinants of cellular and tissue architecture in space and time, which is the focal point of the 4D nucleome initiative. Mathematical and bioengineering modelling of gene circuits and cell shapes may provide powerful algorithms that will contribute to future precision medicine relevant to a number of common medical disorders.
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146665/1/jcmm13853.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146665/2/jcmm13853_am.pd

    Classifying migraine using PET compressive big data analytics of brain’s μ-opioid and D2/D3 dopamine neurotransmission

    Introduction: Migraine is a common and debilitating pain disorder associated with dysfunction of the central nervous system. Advanced magnetic resonance imaging (MRI) studies have reported relevant pathophysiologic states in migraine. However, its molecular mechanistic processes are still poorly understood in vivo. This study examined migraine patients with a novel machine learning (ML) method based on their central μ-opioid and dopamine D2/D3 profiles, the most critical neurotransmitters in the brain for pain perception and its cognitive-motivational interface.
    Methods: We employed compressive Big Data Analytics (CBDA) to identify migraineurs and healthy controls (HC) in a large positron emission tomography (PET) dataset. 198 PET volumes were obtained from 38 migraineurs and 23 HC during rest and thermal pain challenge. 61 subjects were scanned with the selective μ-opioid receptor (μOR) radiotracer [11C]Carfentanil, and 22 with the selective dopamine D2/D3 receptor (DOR) radiotracer [11C]Raclopride. PET scans were recast into a 1D array of 510,340 voxels with spatial and intensity filtering of non-displaceable binding potential (BPND), representing the receptor availability level. We then performed data reduction and CBDA to power rank the predictive brain voxels.
    Results: CBDA classified migraineurs from HC with accuracy, sensitivity, and specificity above 90% for whole-brain and region-of-interest (ROI) analyses. The most predictive ROIs for μOR were the insula (anterior), thalamus (pulvinar, medial-dorsal, and ventral lateral/posterior nuclei), and the putamen. The latter, putamen (anterior), was also the most predictive for migraine regarding DOR D2/D3 BPND levels.
    Discussion: CBDA of endogenous μ-opioid and D2/D3 dopamine dysfunctions in the brain can accurately identify a migraine patient based on their receptor availability across key sensory, motor, and motivational processing regions. Our ML-based findings in the migraineur's brain neurotransmission partly explain the severe impact of migraine suffering and associated neuropsychiatric comorbidities.
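    The published CBDA pipeline has its own data-reduction and power-ranking procedure; the following is only a schematic stand-in on synthetic data, where the voxel count, separation score, and nearest-centroid classifier are all assumptions for illustration, not the authors' method:

```python
# Hypothetical sketch of the general workflow: flatten each scan to a
# voxel vector, rank voxels by how well they separate the two groups,
# then classify on the top-ranked subset.
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_voxels, n_informative = 40, 500, 20

# Synthetic "BPND" vectors: only the first 20 voxels differ by group.
controls = rng.normal(0.0, 1.0, size=(n_per_group, n_voxels))
patients = rng.normal(0.0, 1.0, size=(n_per_group, n_voxels))
patients[:, :n_informative] += 1.5

X = np.vstack([controls, patients])
y = np.array([0] * n_per_group + [1] * n_per_group)

# Crude stand-in for CBDA's power ranking: score each voxel by the
# standardized mean difference between groups; keep the top 20.
diff = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
score = np.abs(diff) / X.std(axis=0)
top = np.argsort(score)[::-1][:n_informative]

# Nearest-centroid classification on the selected voxels (a real
# pipeline would add a train/test split and cross-validation).
c0 = X[y == 0][:, top].mean(axis=0)
c1 = X[y == 1][:, top].mean(axis=0)
pred = (np.linalg.norm(X[:, top] - c1, axis=1)
        < np.linalg.norm(X[:, top] - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
print(f"accuracy on top-ranked voxels: {accuracy:.2f}")
```

Ranking voxels before classification is what keeps the problem tractable when each scan contributes hundreds of thousands of features but only dozens of subjects are available.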

    Numerical methods for computing the discrete and continuous Laplace transforms

    We propose a numerical method to spline-interpolate discrete signals and then apply the integral transforms to the corresponding analytical spline functions. This represents a robust and computationally efficient technique for estimating the Laplace transform of noisy data. We revisited a Meijer G-function symbolic approach to computing the Laplace transform, and alternative approaches to extending canonical observed time-series. A discrete quantization scheme provides the foundation for rapid and reliable estimation of the inverse Laplace transform (ILT). We derive theoretical estimates for the inverse Laplace transform of analytic functions and demonstrate empirical results validating the algorithmic performance using observed and simulated data. We also introduce a generalization of the Laplace transform to higher-dimensional space-time. We tested the discrete Laplace transform algorithm on data sampled from analytic functions with known exact Laplace transforms. The validation of the discrete ILT involves using complex functions with known analytic ILTs.
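    As a minimal sketch of the forward direction described above (assumed details, not the authors' implementation), one can spline-interpolate a sampled signal with scipy and evaluate its Laplace transform by quadrature, checking against the known transform of exp(-t):

```python
# Sketch: spline-interpolate discrete samples, then estimate the
# Laplace transform F(s) = integral of f(t) * exp(-s*t) dt over the
# sampled support by numerical quadrature.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import quad

# Sample f(t) = exp(-t) on [0, 20]; its exact transform is 1/(s+1).
t = np.linspace(0.0, 20.0, 201)
f = np.exp(-t)
spline = CubicSpline(t, f)

def laplace(s, a=0.0, b=20.0):
    # Integrate the spline against the Laplace kernel over [a, b];
    # the tail beyond b is negligible for a decaying signal.
    val, _err = quad(lambda u: spline(u) * np.exp(-s * u), a, b, limit=200)
    return val

s = 2.0
approx = laplace(s)
exact = 1.0 / (s + 1.0)
print(f"spline LT: {approx:.6f}, exact: {exact:.6f}")
```

Because the spline is an analytic piecewise polynomial, the kernel integral is smooth on each interval, which is what makes the estimate stable even when the raw samples are noisy.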

    Multimodal, Multidimensional Models of Mouse Brain

    Naturally occurring mutants and genetically manipulated strains of mice are widely used to model a variety of human diseases. Atlases are an invaluable aid in understanding the impact of such manipulations by providing a standard for comparison and facilitating the integration of anatomic, genetic, and physiologic observations from multiple subjects and experiments. We have developed digital atlases of the C57BL/6J mouse brain (adult and neonate) as comprehensive frameworks for storing and accessing the myriad types of information about the mouse brain. Along with raw and annotated images, these atlases contain database management systems and a set of tools for comparing information from different techniques and different animals. Each atlas establishes a canonical representation of the mouse brain and provides tools for the manipulation and analysis of new data. We describe both atlases and discuss how they may be put to use in organizing and analyzing data from mouse models of epilepsy.